Strategy

Obstacle Avoidance Algorithm

Our robot used a variant of the bug algorithm, driven by a state machine, to make movement decisions. The robot traveled to a set of waypoints while searching along the way for obstacles and for orange and blue golf balls. These tasks were prioritized with ball detection first, obstacle avoidance second, and traveling to the waypoints last. We were able to distinguish the balls from obstacles of other colors by carefully tuning threshold values for the orange and blue balls in the HSV color space and running a simple computer vision algorithm on the images taken by the robot's camera.
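
That priority ordering can be summarized as a small decision function. The sketch below is illustrative only; the state names and boolean flags (ball_in_view, obstacle_ahead) are assumptions rather than the actual RobotControl code.

/* Minimal sketch of the task priority, assuming hypothetical flags
 * computed elsewhere in the control loop; not the team's actual code. */
typedef enum { GO_TO_WAYPOINT, AVOID_OBSTACLE, CAPTURE_BALL } RobotState;

RobotState choose_state(int ball_in_view, int obstacle_ahead)
{
    if (ball_in_view)          /* ball detection has the highest priority */
        return CAPTURE_BALL;
    if (obstacle_ahead)        /* obstacle avoidance comes second         */
        return AVOID_OBSTACLE;
    return GO_TO_WAYPOINT;     /* otherwise keep driving to the waypoints */
}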

In each iteration of our RobotControl method, the LADAR readings were processed to produce front, left, and right minimum distances. The front minimum was compared against a threshold to determine whether an obstacle was in the robot's path. If so, the robot used the left and right minima to choose the path of least obstruction and continued traveling in that direction. To decide when to break off wall following and head to one of the goal points, the angle between the robot's heading and the goal was calculated from two vectors: one pointing directly ahead of the robot and the other pointing from the robot's position to the goal point. If this angle fell within a small threshold, the robot knew to break off.
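
A rough sketch of these two computations in C is shown below. The scan layout (-90 to +90 degrees), sector boundaries, and threshold values are assumptions chosen for illustration, not the values used on the robot.

#include <math.h>

#define PI               3.14159265f
#define FRONT_THRESHOLD  0.5f    /* assumed obstacle distance, meters     */
#define BREAKOFF_ANGLE   0.15f   /* assumed break-off threshold, radians  */

/* Split the LADAR scan into right, front, and left sectors, record the
 * left and right minima, and report whether the front minimum falls
 * below the obstacle threshold. */
int obstacle_in_front(const float *scan, int n, float *left_min, float *right_min)
{
    float front_min = 1e9f;
    *left_min = *right_min = 1e9f;
    for (int i = 0; i < n; i++) {
        float angle = -PI / 2 + (PI * i) / (n - 1);   /* -90 deg .. +90 deg */
        float r = scan[i];
        if (angle < -PI / 6)     { if (r < *right_min) *right_min = r; }
        else if (angle > PI / 6) { if (r < *left_min)  *left_min  = r; }
        else                     { if (r < front_min)  front_min  = r; }
    }
    return front_min < FRONT_THRESHOLD;
}

/* Angle between the robot's heading vector and the vector from the robot
 * to the goal point; when it drops below BREAKOFF_ANGLE, wall following
 * can be abandoned in favor of driving straight to the goal. */
float angle_to_goal(float x, float y, float theta, float gx, float gy)
{
    float hx = cosf(theta), hy = sinf(theta);   /* unit vector straight ahead */
    float dx = gx - x,      dy = gy - y;        /* vector to the goal point   */
    return acosf((hx * dx + hy * dy) / sqrtf(dx * dx + dy * dy));
}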

The robot's position was known to us at all times because we used the OptiTrack Motion Capture System, which gave us the robot's x and y position within the course. This let us avoid problems such as trying to pick up balls outside the course, and it allowed us to send information such as a ball's ID and its x and y position to LabVIEW. We also used the robot's gyroscope readings to determine its orientation, which was especially useful for knowing which direction the robot was facing. Finally, the robot had a vision camera mounted on the front that detected blue and orange balls and calculated the centroid of each detection; this information told us when to enter our ball detection algorithm.
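
The color segmentation step can be sketched as a simple threshold-and-centroid pass over the camera frame. The code below assumes the frame has already been converted to HSV and stored as interleaved H, S, V bytes; the threshold parameters and minimum pixel count are placeholders, not the tuned values used on the robot.

/* Return the centroid of pixels whose HSV values fall inside the given
 * ranges; the caller passes the tuned ranges for orange or blue. */
typedef struct { int found; float cx, cy; } Centroid;

Centroid find_ball(const unsigned char *hsv, int width, int height,
                   int h_min, int h_max, int s_min, int v_min)
{
    long sum_x = 0, sum_y = 0, count = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            const unsigned char *p = hsv + 3 * (y * width + x);
            if (p[0] >= h_min && p[0] <= h_max && p[1] >= s_min && p[2] >= v_min) {
                sum_x += x;
                sum_y += y;
                count++;
            }
        }
    }
    Centroid c = { 0, 0.0f, 0.0f };
    if (count > 50) {                 /* ignore tiny patches of matching color    */
        c.found = 1;
        c.cx = (float)sum_x / count;  /* centroid steers the approach to the ball */
        c.cy = (float)sum_y / count;
    }
    return c;
}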

Gripper Design and Ball Detection Algorithm

The gripper of the robot was designed to be simple yet effective. It consists of two servos and four aluminum plates, all located on the front of the robot and mounted to a white plastic plate that was provided to each project team as a basis for its gripper design.

Two of the four aluminum pieces are attached to each side of the plastic plate by two screws and act as fixed walls that prevent balls from rolling out the sides of the gripper's hopper. The other two aluminum plates are attached to the two servos. One servo is mounted on the front of the plastic plate and carries an aluminum piece that spans the front of the hopper and acts as a gate: when the servo rotates, the gate opens or closes access to the hopper. The fourth aluminum plate divides the hopper into two sections, one for the blue golf balls and the other for the orange golf balls. This divider is attached to a servo that rotates depending on the color of ball being captured, fully opening the side corresponding to the targeted color so that the ball is stored on the correct side of the hopper. Likewise, when a specific color of ball is being deposited into a chute, the servo rotates so that only balls of that color can exit the hopper; the gate on the front of the gripper is then opened and the robot reverses, causing the balls to roll out of the hopper and into the correct chute.
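
The servo logic described above can be summarized in a short sketch. The servo channels, PWM pulse widths, and the set_servo() stub below are hypothetical stand-ins for the robot's actual servo interface.

#include <stdio.h>

typedef enum { BLUE, ORANGE } BallColor;

#define GATE_SERVO     0      /* hypothetical channel numbers            */
#define DIVIDER_SERVO  1
#define GATE_OPEN      1600   /* hypothetical pulse widths, microseconds */
#define GATE_CLOSED    1000
#define DIVIDER_BLUE   1200   /* divider swung to open the blue side     */
#define DIVIDER_ORANGE 1800   /* divider swung to open the orange side   */

/* Stand-in for the robot's real servo command; prints instead of moving. */
static void set_servo(int channel, int pulse_us)
{
    printf("servo %d -> %d us\n", channel, pulse_us);
}

/* Capturing: swing the divider toward the targeted color and open the
 * gate so the ball rolls into the correct side of the hopper. */
void capture_ball(BallColor color)
{
    set_servo(DIVIDER_SERVO, color == BLUE ? DIVIDER_BLUE : DIVIDER_ORANGE);
    set_servo(GATE_SERVO, GATE_OPEN);
}

/* Depositing: open only the matching side and lift the gate; the robot's
 * reverse motion (commanded elsewhere) empties that side into the chute. */
void deposit_balls(BallColor color)
{
    set_servo(DIVIDER_SERVO, color == BLUE ? DIVIDER_BLUE : DIVIDER_ORANGE);
    set_servo(GATE_SERVO, GATE_OPEN);
}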

See Jacquelin in Action